# CPU-efficient inference
| Model | License | Description | Task / Language | Author | Downloads | Likes |
| --- | --- | --- | --- | --- | --- | --- |
| Josiefied Qwen3 8B Abliterated V1 GGUF | | GGUF quantization of Qwen3-8B using IQ-DynamicGate ultra-low-bit quantization to improve memory efficiency and inference speed. | Large Language Model | Mungert | 559 | 1 |
| GLM Z1 9B 0414 GGUF | MIT | Bilingual (Chinese/English) text generation model distributed in GGUF format, available at quantization levels ranging from BF16 down to ultra-low-bit (1-2 bit). | Large Language Model · Multilingual | Mungert | 1,598 | 3 |
| Deepcoder 14B Preview GGUF | MIT | Ultra-low-bit (1-2 bit) quantized model using IQ-DynamicGate technology, suited to memory-constrained devices and edge computing scenarios. | Large Language Model · English | Mungert | 1,764 | 6 |
| Orpheus 3b 0.1 Ft GGUF | Apache-2.0 | Ultra-low-bit quantized model built on the Llama-3-8B architecture, using IQ-DynamicGate adaptive 1-2 bit precision quantization for memory-constrained environments. | Large Language Model · English | Mungert | 1,427 | 1 |
| Llama 3.1 Nemotron Nano 8B V1 GGUF | Other | 8B-parameter model based on the Llama-3.1 architecture, with memory usage reduced through IQ-DynamicGate ultra-low-bit quantization. | Large Language Model · English | Mungert | 2,088 | 4 |
| Mistral Small 3.1 24B Instruct 2503 GGUF | Apache-2.0 | Instruction-tuned model derived from Mistral-Small-3.1-24B-Base-2503, distributed in GGUF format with IQ-DynamicGate ultra-low-bit quantization. | Large Language Model · Multilingual | Mungert | 10.01k | 7 |
| Llama 3.1 8B Instruct GGUF | | Instruction-tuned version of Llama-3.1-8B, quantized to ultra-low bit widths (1-2 bits) with IQ-DynamicGate to preserve accuracy while reducing memory use. | Large Language Model · Multilingual | Mungert | 1,073 | 3 |
| Mistral 7B Instruct V0.2 GGUF | Apache-2.0 | Instruction-tuned model based on the Mistral-7B architecture for text generation, optimized for memory efficiency with IQ-DynamicGate ultra-low-bit quantization. | Large Language Model | Mungert | 742 | 2 |
| Llm Data Textbook Quality Fasttext Classifier V1 | MIT | fastText-based text classifier that judges whether text meets textbook-level quality, intended as a data filtering tool for LLM training corpora. | Text Classification · English | kenhktsui | 35 | 4 |
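All of the generative models above are distributed as GGUF files, which can be run entirely on the CPU with a GGUF-compatible runtime such as llama.cpp. The sketch below is a minimal CPU-only example using the llama-cpp-python bindings; the model file name, context size, and thread count are illustrative assumptions, not values taken from the listings.

```python
# Minimal CPU-only inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF path below is a placeholder: download any of the quantized GGUF files
# listed above and point model_path at the local file.
from llama_cpp import Llama

llm = Llama(
    model_path="./Mistral-7B-Instruct-v0.2-Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=4096,       # context window; smaller values reduce memory use
    n_threads=8,      # match the number of physical CPU cores
    n_gpu_layers=0,   # 0 keeps every layer on the CPU
)

# Chat-style completion; lower-bit quants trade some accuracy for memory and speed.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quantization variants (down to the 1-2 bit IQ-DynamicGate files) reduce the memory footprint further, at the cost of output quality.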
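The last entry is a fastText classifier rather than a generative model: it scores text for textbook-level quality so low-quality documents can be filtered out of LLM training data. A minimal usage sketch follows; the model file name is a placeholder and the label strings are not asserted here, so check the model card for the published file and labels.

```python
# Sketch of filtering training text with a fastText quality classifier (pip install fasttext).
# The .bin file name is a placeholder for the file published with the classifier.
import fasttext

model = fasttext.load_model("llm-data-textbook-quality-fasttext-classifier-v1.bin")

documents = [
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "click here 4 FREE stuff!!! best deals best deals best deals",
]

for doc in documents:
    # fastText expects a single line of text (no newlines) per prediction.
    labels, probs = model.predict(doc.replace("\n", " "), k=1)
    print(labels[0], round(float(probs[0]), 3), doc[:50])
```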